In this paper we study the smooth strongly convex minimization problem $\min_{x}\min_y f(x,y)$. The existing optimal first-order methods require $\mathcal{O}(\sqrt{\max\{\kappa_x,\kappa_y\}} \log(1/\epsilon))$ computations of both $\nabla_x f(x,y)$ and $\nabla_y f(x,y)$, where $\kappa_x$ and $\kappa_y$ are the condition numbers with respect to the variable blocks $x$ and $y$. We propose a new algorithm that requires only $\mathcal{O}(\sqrt{\kappa_x} \log(1/\epsilon))$ computations of $\nabla_x f(x,y)$ and $\mathcal{O}(\sqrt{\kappa_y} \log(1/\epsilon))$ computations of $\nabla_y f(x,y)$. In some applications $\kappa_x \gg \kappa_y$, and the computation of $\nabla_y f(x,y)$ is significantly cheaper than that of $\nabla_x f(x,y)$; in this case, our algorithm substantially outperforms the existing state-of-the-art methods.
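To make the baseline complexity concrete, here is a toy numerical illustration (not the paper's algorithm) of why coupling the two oracle counts is wasteful: with a plain, non-accelerated joint gradient method, both oracles are called equally often, and the iteration count is driven by the worse-conditioned block. The quadratic test problem and all constants are our own illustrative choices.

```python
import numpy as np

mu, L_x, L_y = 1.0, 1000.0, 10.0           # kappa_x = 1000 >> kappa_y = 10
A = np.diag([mu, L_x])                     # curvature of the x-block
B = np.diag([mu, L_y])                     # curvature of the y-block

f  = lambda x, y: 0.5 * x @ A @ x + 0.5 * y @ B @ y
gx = lambda x, y: A @ x                    # nabla_x f (expensive in applications)
gy = lambda x, y: B @ y                    # nabla_y f (cheap in applications)

x, y = np.ones(2), np.ones(2)
calls_x = calls_y = 0
step = 1.0 / max(L_x, L_y)                 # joint step size limited by the x-block
while f(x, y) > 1e-6:
    x, y = x - step * gx(x, y), y - step * gy(x, y)
    calls_x += 1; calls_y += 1
print(calls_x, calls_y)                    # equal counts, both driven by kappa_x
```

The cheap $y$-oracle is forced to run exactly as many (and as small) steps as the expensive $x$-oracle; decoupling the two counts, as the proposed algorithm does, removes this waste.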
The celebrated FedAvg algorithm of McMahan et al. (2017) is based on three components: client sampling (CS), data sampling (DS) and local training (LT). While the first two are reasonably well understood, the third component, whose role is to reduce the number of communication rounds needed to train the model, resisted all attempts at a satisfactory theoretical explanation. Malinovsky et al. (2022) identified four distinct generations of LT methods based on the quality of the provided theoretical communication complexity guarantees. Despite a lot of progress in this area, none of the existing works were able to show that it is theoretically better to employ multiple local gradient-type steps (i.e., to engage in LT) than to rely on a single local gradient-type step only in the important heterogeneous data regime. In a recent breakthrough embodied in their ProxSkip method and its theoretical analysis, Mishchenko et al. (2022) showed that LT indeed leads to provable communication acceleration for arbitrarily heterogeneous data, thus jump-starting the $5^{\rm th}$ generation of LT methods. However, while these latest generation LT methods are compatible with DS, none of them support CS. We resolve this open problem in the affirmative. In order to do so, we had to base our algorithmic development on new algorithmic and theoretical foundations.
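For readers unfamiliar with the baseline, the following is a minimal single-process sketch of generic FedAvg (McMahan et al., 2017) highlighting its three components: client sampling (CS), data sampling (DS) and local training (LT). It illustrates the template discussed above, not the new 5th-generation method proposed in the paper; the least-squares data and all sizes are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
num_clients, dim = 10, 5
# Heterogeneous least-squares data: client i holds its own (A_i, b_i).
data = [(rng.normal(size=(20, dim)), rng.normal(size=20)) for _ in range(num_clients)]

def local_grad(w, A, b, batch):            # DS: stochastic gradient on a minibatch
    Ab, bb = A[batch], b[batch]
    return Ab.T @ (Ab @ w - bb) / len(batch)

w = np.zeros(dim)
for _ in range(100):                       # communication rounds
    clients = rng.choice(num_clients, size=3, replace=False)   # CS
    updates = []
    for i in clients:
        A, b = data[i]
        w_i = w.copy()
        for _ in range(5):                 # LT: multiple local gradient-type steps
            batch = rng.choice(len(b), size=4, replace=False)  # DS
            w_i -= 0.01 * local_grad(w_i, A, b, batch)
        updates.append(w_i)
    w = np.mean(updates, axis=0)           # server averages the returned models
```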
In this paper, we propose a new zeroth-order optimization method called the Minibatch Stochastic Three Points (MiSTP) method for solving unconstrained minimization problems when only approximate evaluations of the objective function are available. It is based on the recently proposed Stochastic Three Points (STP) method (Bergou et al., 2020). At each iteration, MiSTP generates a random search direction in a manner similar to STP, but selects the next iterate based solely on an approximation of the objective function rather than its exact evaluation. We also analyze the complexity of the method in the nonconvex and convex cases and evaluate its performance on multiple machine learning tasks.
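A hedged sketch of the MiSTP idea as described above: an STP-style step that compares the three candidate points using a minibatch approximation of the objective rather than its exact value. The least-squares test problem is our own, and the decaying step-size schedule is a common choice rather than the paper's prescription.

```python
import numpy as np

rng = np.random.default_rng(1)
n, dim = 1000, 20
A = rng.normal(size=(n, dim))
b = rng.normal(size=n)

def batch_loss(x, batch):                  # minibatch approximation of f
    return np.mean((A[batch] @ x - b[batch]) ** 2)

x = np.zeros(dim)
for k in range(1, 2001):
    s = rng.normal(size=dim)
    s /= np.linalg.norm(s)                 # random search direction, as in STP
    alpha = 1.0 / np.sqrt(k)               # decaying step size (a common choice)
    batch = rng.choice(n, size=32, replace=False)   # same batch for all 3 points
    candidates = [x, x + alpha * s, x - alpha * s]
    x = min(candidates, key=lambda z: batch_loss(z, batch))
print(np.mean((A @ x - b) ** 2))           # full objective, for reporting only
```

Note that no gradients are ever computed: the method only compares (approximate) function values at three points, which is what makes it a zeroth-order scheme.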
We study distributed optimization methods based on the {\em local training (LT)} paradigm: achieving communication efficiency by performing local gradient-based training on the clients before parameter averaging. Looking back at the progress of the field, we {\em identify 5 generations of LT methods}: 1) heuristic, 2) homogeneous, 3) sublinear, 4) linear, and 5) accelerated. The 5${}^{\rm th}$ generation, initiated by the ProxSkip method of Mishchenko, Malinovsky, Stich and Richt\'{a}rik (2022), is characterized by the first theoretical confirmation that LT is a communication acceleration mechanism. Inspired by this recent progress, we contribute to the 5${}^{\rm th}$ generation of LT methods by showing that they can be further enhanced using {\em variance reduction}. While all previous theoretical results for LT methods ignore the cost of local work altogether and are framed purely in terms of the number of communication rounds, we show that our methods can be substantially faster in terms of the {\em total training cost} than the state-of-the-art method ProxSkip, in both theory and practice, in the regime where local computation is sufficiently expensive. We characterize this threshold theoretically and confirm our theoretical predictions with empirical results.
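A hedged sketch of the ProxSkip/Scaffnew mechanism (Mishchenko et al., 2022) that defines the 5th generation: clients take local, control-variate-corrected gradient steps and communicate (average) only with small probability $p$. The paper's contribution, plugging a variance-reduced local gradient estimator into this template, is not reproduced here; the quadratic data is made up.

```python
import numpy as np

rng = np.random.default_rng(2)
num_clients, dim = 5, 10
targets = [rng.normal(size=dim) for _ in range(num_clients)]  # heterogeneous data
grad = lambda i, x: x - targets[i]          # nabla f_i for f_i(x) = ||x - a_i||^2 / 2

gamma, p = 0.1, 0.1                         # step size and communication probability
xs = [np.zeros(dim) for _ in range(num_clients)]
hs = [np.zeros(dim) for _ in range(num_clients)]   # control variates
for t in range(2000):
    xs_hat = [xs[i] - gamma * (grad(i, xs[i]) - hs[i]) for i in range(num_clients)]
    if rng.random() < p:                    # communication round (rare)
        avg = np.mean(xs_hat, axis=0)
        hs = [hs[i] + (p / gamma) * (avg - xs_hat[i]) for i in range(num_clients)]
        xs = [avg.copy() for _ in range(num_clients)]
    else:                                   # pure local step, no communication
        xs = xs_hat
print(np.linalg.norm(xs[0] - np.mean(targets, axis=0)))  # distance to the optimum
```

The control variates $h_i$ are what allow the local steps to help rather than hurt under arbitrary heterogeneity; in expectation only a $p$ fraction of the iterations involve communication.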
Gradient compression is a popular technique for improving the communication complexity of stochastic first-order methods in the distributed training of machine learning models. However, the existing works consider only with-replacement sampling of stochastic gradients. In contrast, it is well known in practice, and was recently confirmed in theory, that stochastic methods based on without-replacement sampling, such as the Random Reshuffling (RR) method, perform better than their with-replacement counterparts. In this work, we close this gap in the literature and provide the first analysis of methods that combine gradient compression with without-replacement sampling. We first develop a distributed variant of random reshuffling with gradient compression (Q-RR), and show how the variance introduced by gradient quantization can be reduced via the use of control iterates. Next, to better fit Federated Learning applications, we incorporate local computation and propose a variant of Q-RR called Q-NASTYA. Q-NASTYA uses local gradient steps along with different local and global step sizes. We then also show how to reduce the compression variance in this setting. Finally, we prove convergence results for the proposed methods and outline several settings in which they improve upon existing algorithms.
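A hedged, much-simplified rendering of the Q-RR template described above: distributed random reshuffling in which every worker sends a *quantized* per-sample gradient each step. We use unbiased rand-$k$ sparsification as the compressor; the control-iterate variance-reduction trick and Q-NASTYA's local steps are omitted. All names and problem data are ours, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(3)
workers, n, dim = 4, 8, 10                 # n data points per worker
data = [[rng.normal(size=dim) for _ in range(n)] for _ in range(workers)]

def rand_k(v, k=3):                        # unbiased rand-k sparsifier
    mask = rng.choice(len(v), size=k, replace=False)
    out = np.zeros_like(v)
    out[mask] = v[mask] * (len(v) / k)     # rescale so that E[Q(v)] = v
    return out

x, gamma = np.zeros(dim), 0.05
for epoch in range(200):
    perms = [rng.permutation(n) for _ in range(workers)]   # reshuffle once per epoch
    for i in range(n):                     # one pass over the data, no replacement
        # worker m's gradient of f_{m,i}(x) = ||x - a_{m,i}||^2 / 2, compressed
        g = np.mean([rand_k(x - data[m][perms[m][i]]) for m in range(workers)], axis=0)
        x -= gamma * g
print(np.linalg.norm(x - np.mean([np.mean(d, axis=0) for d in data], axis=0)))
```

The residual error here is dominated by the compressor's variance; the control iterates introduced in the paper are precisely the device that removes it.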
The increased importance of mobile photography has created a need for fast and performant RAW image processing pipelines capable of producing good visual results in spite of the limitations of mobile camera sensors. While deep learning-based approaches can efficiently solve this problem, their computational requirements usually remain too large for high-resolution on-device image processing. To address this limitation, we propose a novel PyNET-V2 Mobile CNN architecture designed specifically for edge devices, able to process RAW 12MP photos directly on mobile phones in under 1.5 seconds while producing high perceptual photo quality. To train and evaluate the performance of the proposed solution, we use the real-world Fujifilm UltraISP dataset consisting of thousands of RAW-RGB image pairs captured with a professional medium-format 102MP Fujifilm camera and a popular Sony mobile camera sensor. The results demonstrate that the PyNET-V2 Mobile model can substantially surpass the quality of traditional ISP pipelines, while outperforming the previously introduced neural network-based solutions designed for fast image processing. Furthermore, we show that the proposed architecture is also compatible with the latest mobile AI accelerators such as NPUs or APUs, which can be used to further reduce the latency of the model to as little as 0.5 seconds. The dataset, code and pre-trained models used in this paper are available on the project website: https://github.com/gmalivenko/PyNET-v2
Various depth estimation models are now widely used on many mobile and IoT devices for image segmentation, bokeh effect rendering, object tracking and many other mobile tasks. Thus, it is crucial to have efficient and accurate depth estimation models that can run fast on low-power mobile chipsets. In this Mobile AI challenge, the target was to develop deep learning-based single-image depth estimation solutions that can show real-time performance on IoT platforms and smartphones. For this, the participants used a large-scale RGB-to-depth dataset that was collected with the ZED stereo camera, capable of generating depth maps for objects located up to 50 meters away. The runtime of all models was evaluated on the Raspberry Pi 4 platform, where the developed solutions were able to generate VGA-resolution depth maps at up to 27 FPS while achieving high-fidelity results. All models developed in the challenge are also compatible with any Android or Linux-based mobile devices, and their detailed descriptions are provided in this paper.
Reinforcement learning remains one of the major directions of contemporary development in control engineering and machine learning. A fine intuition, flexible settings, and ease of application are among the many perks of this methodology. From the standpoint of machine learning, the main strength of a reinforcement learning agent is that it "captures" (learns) the optimal behavior in the given environment. Typically, the agent is built on neural networks, and it is their approximation abilities that give rise to the above belief. From the standpoint of control engineering, however, reinforcement learning has serious deficiencies. The most significant one is the lack of stability guarantees for the closed loop formed by the environment and the agent. A great deal of research is therefore devoted to stabilizing reinforcement learning. Speaking of stability, the celebrated Lyapunov theory is the de facto tool. It is thus no wonder that so many techniques for stabilizing reinforcement learning rely, in one way or another, on Lyapunov theory. In control theory, there is an intricate connection between a stabilizing controller and a Lyapunov function. Employing such a pair therefore seems highly attractive for designing stabilizing reinforcement learning agents. However, computing a Lyapunov function is generally a cumbersome process. In this note, we show how to construct a stabilizing reinforcement learning agent that does not employ such a function at all. We only assume that a Lyapunov function exists, which is a natural assumption whenever the given system (read: environment) is stabilizable, but we do not need to compute one.
Control Lyapunov functions are a central tool for stabilization. They generalize the abstract energy function, the Lyapunov function, to the case of controlled systems. It is a well-known fact that most control Lyapunov functions are nonsmooth; this is also the case for nonholonomic systems such as wheeled robots and cars. There exist stabilization frameworks that use nonsmooth control Lyapunov functions, such as Dini aiming and steepest descent. This work generalizes the related results to the stochastic case. As the groundwork, a sampled control scheme is chosen, in which control actions are computed at discrete moments in time using discrete measurements of the system state. In such a setup, special attention should be paid to the sample-to-sample behavior of the control Lyapunov function. A particular challenge here is the random noise acting on the system. The central result of this work is a theorem stating, roughly, that if there exists a nonsmooth control Lyapunov function, then the given stochastic dynamical system can be practically stabilized in a sample-and-hold mode, meaning that the control actions are held constant within the sampling time steps. The particular control method chosen is based on Moreau-Yosida regularization, in other words, on the inf-convolution of the control Lyapunov function, but the overall framework is extendable to further control schemes. The system noise is assumed to be bounded almost surely, although the case of unbounded noise is briefly addressed.
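A toy sketch of the sample-and-hold scheme discussed above, with the Moreau-Yosida regularization (inf-convolution) made concrete for one simple case: the nonsmooth CLF $V(x) = \|x\|$ for a noisy integrator. The Moreau envelope of the Euclidean norm is the Huber function, whose gradient is available in closed form. The system, gains and noise model are our illustrative choices, not the paper's general setting.

```python
import numpy as np

rng = np.random.default_rng(4)
lam, gain = 0.1, 2.0                         # regularization parameter, control gain
dt_sample, dt_sim = 0.1, 0.01                # sampling step >> integration step

def moreau_grad(x):                          # gradient of the Moreau envelope of ||.||
    r = np.linalg.norm(x)
    return x / max(r, lam)                   # smooth everywhere, including near 0

x = np.array([2.0, -1.5])
for step in range(500):
    if step % int(dt_sample / dt_sim) == 0:  # sampling instant: measure the state...
        u = -gain * moreau_grad(x)           # ...and HOLD u constant in between
    noise = np.clip(rng.normal(size=2), -1, 1)   # bounded (a.s.) random disturbance
    x = x + dt_sim * u + np.sqrt(dt_sim) * 0.2 * noise
print(np.linalg.norm(x))                     # state ends near the origin
```

Because the noise never vanishes, the state only reaches a neighborhood of the origin rather than the origin itself, which is exactly the "practical" stabilization asserted by the theorem.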
This is a brief comment on the paper "Asymptotically Stable Adaptive-Optimal Control Algorithm With Saturated Actuators" by Vamvoudakis et al. The problem of stability of reinforcement learning (RL) agents remains hard, and the said work suggested achieving a suitable stability property via a technique from adaptive control: a robustifying term added to the action. However, this approach to stabilizing RL has an issue, which we explain in this note. Furthermore, Vamvoudakis et al. appear to have made a fallacious assumption on the Hamiltonian under a generic policy. To provide a positive result, we not only point out this mistake, but also show that the critic neural network weights converge in a stochastic, continuous-time environment, provided certain conditions on the behavior policy hold.